Simultaneous Amplitude and Phase Measurement for Periodic Optical Signals Using Time-Resolved Optical Filtering
Time-resolved optical filtering (TROF) measures the spectrogram or sonogram
using a fast photodiode following a tunable narrowband optical filter. For
periodic signals, a numerical TROF algorithm matches the sonogram to find the
original complex electric field, or equivalently, both the amplitude and phase.
For phase-modulated optical signals, the TROF algorithm is initialized using
the craters and ridges of the sonogram.
Comment: 10 pages, 5 figures
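The forward model behind TROF can be sketched numerically: sweep a narrowband
filter across the signal spectrum and record the photodetected intensity after
each filter position. A minimal sketch, assuming a Gaussian filter shape and
toy parameters (the paper's retrieval algorithm then inverts this model, which
is not shown here):

```python
import numpy as np

def sonogram(field_t, dt, centers, bw):
    """Simulate TROF measurement: for each filter center frequency in
    `centers` (rad/s), apply a narrowband Gaussian filter to the field
    spectrum and record the square-law photodiode intensity |e(t)|^2.
    Returns an array of shape (len(centers), len(field_t))."""
    n = len(field_t)
    freqs = 2 * np.pi * np.fft.fftfreq(n, d=dt)      # angular frequency grid
    spectrum = np.fft.fft(field_t)
    rows = []
    for fc in centers:
        h = np.exp(-0.5 * ((freqs - fc) / bw) ** 2)  # Gaussian filter response
        filtered = np.fft.ifft(spectrum * h)         # field after the filter
        rows.append(np.abs(filtered) ** 2)           # photodiode: intensity only
    return np.array(rows)

# Toy periodic phase-modulated field: constant amplitude, sinusoidal phase.
n, dt = 256, 1e-12
t = np.arange(n) * dt
field = np.exp(1j * 2.0 * np.sin(2 * np.pi * 50e9 * t))
S = sonogram(field, dt, centers=np.linspace(-5e11, 5e11, 64), bw=1e11)
```

The phase information is lost at the photodiode, which is why the paper needs
a numerical algorithm, seeded from the sonogram's craters and ridges, to
recover the complex field.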
Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations
Large language models (LMs) have exhibited superior in-context learning (ICL)
ability, adapting to target tasks when prompted with a few input-output
demonstrations. Towards better ICL, different methods have been proposed to
select representative demonstrations from existing training corpora. However,
such a setting is not aligned with real-world practice, as end-users usually
query LMs without access to demonstration pools. Inspired by evidence
suggesting that LMs' zero-shot capabilities are underrated, and that the role
of demonstrations is primarily to expose a model's intrinsic functionality, we
introduce
Self-ICL, a simple framework for zero-shot ICL. Given a test input, Self-ICL
first prompts the model to generate pseudo-inputs. Next, the model predicts
pseudo-labels for the pseudo-inputs via zero-shot prompting. Finally, we
construct pseudo-demonstrations from pseudo-input-label pairs, and perform ICL
for the test input. Evaluation on BIG-Bench Hard shows Self-ICL steadily
surpasses zero-shot and zero-shot chain-of-thought baselines on head-to-head
and all-task average performance. Our findings suggest the possibility of
bootstrapping LMs' intrinsic capabilities towards better zero-shot performance.
Comment: Work in progress
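The three steps described above can be sketched as a single loop around any
text-in/text-out model. The prompt wording and the `lm` callable are
illustrative assumptions, not the paper's exact prompts:

```python
from typing import Callable

def self_icl(lm: Callable[[str], str], test_input: str, task: str, k: int = 3) -> str:
    """Sketch of the Self-ICL loop: the same LM generates pseudo-inputs,
    labels them zero-shot, then answers the real test input with those
    pseudo-demonstrations prepended."""
    # Step 1: generate k pseudo-inputs resembling the test input.
    prompt = (f"Task: {task}\nExample input: {test_input}\n"
              f"Write {k} new, diverse inputs for this task, one per line:")
    pseudo_inputs = [s for s in lm(prompt).splitlines() if s.strip()][:k]
    # Step 2: predict pseudo-labels via zero-shot prompting.
    pseudo_labels = [lm(f"Task: {task}\nInput: {x}\nAnswer:") for x in pseudo_inputs]
    # Step 3: perform ICL with the pseudo-demonstrations.
    demos = "".join(f"Input: {x}\nAnswer: {y}\n\n"
                    for x, y in zip(pseudo_inputs, pseudo_labels))
    return lm(f"Task: {task}\n{demos}Input: {test_input}\nAnswer:")
```

Because every step calls the same model zero-shot, no external demonstration
pool is ever required.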
Large Language Models Perform Diagnostic Reasoning
We explore the extension of chain-of-thought (CoT) prompting to medical
reasoning for the task of automatic diagnosis. Motivated by doctors'
underlying reasoning process, we present Diagnostic-Reasoning CoT (DR-CoT).
Empirical results demonstrate that by simply prompting large language models
trained only on a general text corpus with two DR-CoT exemplars, diagnostic
accuracy improves by 15% compared to standard prompting. Moreover, the gap
reaches a pronounced 18% in out-of-domain settings. Our findings suggest that
expert-knowledge reasoning in large language models can be elicited through
proper prompting.
Comment: Accepted as a Tiny Paper at ICLR 2023 (10 pages, 5 figures)
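The prompting setup can be sketched as plain prompt assembly: each exemplar
pairs a patient description with an explicit diagnostic-reasoning trace before
the diagnosis. The field names and exemplar texts below are placeholders, not
the paper's actual prompts:

```python
def build_dr_cot_prompt(exemplars, patient_report):
    """Assemble a few-shot prompt in a DR-CoT style: exemplars show the
    reasoning trace between symptoms and diagnosis, and the final block
    leaves the reasoning slot open for the model to complete."""
    parts = []
    for ex in exemplars:
        parts.append(f"Patient: {ex['symptoms']}\n"
                     f"Diagnostic reasoning: {ex['reasoning']}\n"
                     f"Diagnosis: {ex['diagnosis']}\n")
    parts.append(f"Patient: {patient_report}\nDiagnostic reasoning:")
    return "\n".join(parts)
```

With two exemplars, this mirrors the two-shot setting in which the reported
15% gain over standard prompting was observed.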
Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation
In this paper, we address the hallucination problem commonly found in natural
language generation tasks. Language models often generate fluent and convincing
content but can lack consistency with the provided source, resulting in
potential inaccuracies. We propose a new decoding method called
Fidelity-Enriched Contrastive Search (FECS), which augments the contrastive
search framework with context-aware regularization terms. FECS promotes tokens
that are semantically similar to the provided source while penalizing
repetitiveness in the generated text. We demonstrate its effectiveness across
two tasks prone to hallucination: abstractive summarization and dialogue
generation. Results show that FECS consistently enhances faithfulness across
various language model sizes while maintaining output diversity comparable to
well-performing decoding algorithms.
Comment: Accepted as a short paper at EMNLP 2023
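The token-scoring idea can be illustrated with toy vectors: keep the model's
confidence and the contrastive-search degeneration penalty, and add a reward
for similarity to the source. The exact functional form and weights below are
illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def fecs_score(prob, cand_vec, gen_vecs, src_vecs, alpha=0.4, beta=0.4):
    """Toy FECS-style candidate score: model confidence, minus a
    degeneration penalty (max similarity to already-generated token
    representations, as in contrastive search), plus a faithfulness
    reward (max similarity to source token representations)."""
    degen = max((cosine(cand_vec, g) for g in gen_vecs), default=0.0)
    faith = max((cosine(cand_vec, s) for s in src_vecs), default=0.0)
    return (1 - alpha - beta) * prob - alpha * degen + beta * faith
```

Under this scoring, a candidate resembling the source outranks an equally
probable candidate resembling what was already generated, which is the
faithfulness-vs-repetition trade the abstract describes.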
Fuzzy Risk-Based Life Cycle Assessment for Estimating Environmental Aspects in EMS
Environmental aspects play a central role in an environmental management
system (EMS) because they are the basis for identifying an organization's
environmental targets. Existing methods for assessing environmental aspects
fall into three categories: risk-assessment-based (RA-based), LCA-based, and
criterion-based methods. To combine the benefits of these three categories of
research, this study proposes an integrated framework combining RA-, LCA-,
and criterion-based methods. The integrated framework incorporates LCA
techniques to identify the causal linkage among aspect, pathway, receptor,
and impact; uses fuzzy logic to assess aspects; considers fuzzy conditions in
likelihood assessment; and employs a new multi-criteria decision analysis
method, multi-criteria and multi-connection comprehensive assessment (MMCA),
to estimate significant aspects in EMS. The proposed model is verified using
a real case study, and the results show that the method successfully
prioritizes environmental aspects.
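The fuzzy-likelihood step can be sketched with standard triangular membership
functions: a raw frequency is fuzzified into linguistic grades and the
dominant grade is reported. The grade names and boundaries below are
illustrative; the abstract does not specify the membership functions or the
MMCA aggregation, which is not shown here:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_likelihood(freq, grades):
    """Fuzzify a raw occurrence frequency in [0, 1] into linguistic grades
    and return the grade with the highest membership degree."""
    degrees = {name: tri_membership(freq, *abc) for name, abc in grades.items()}
    return max(degrees, key=degrees.get), degrees

# Illustrative grade boundaries (assumed, not from the paper).
grades = {"low": (0.0, 0.1, 0.5), "medium": (0.2, 0.5, 0.8), "high": (0.5, 0.9, 1.0)}
```

Such fuzzified likelihoods would then feed the MMCA aggregation to rank
aspects by significance.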
ZARA: Improving Few-Shot Self-Rationalization for Small Language Models
Language models (LMs) that jointly generate end-task answers as well as
free-text rationales are known as self-rationalization models. Recent works
demonstrate great performance gains for self-rationalization by few-shot
prompting LMs with rationale-augmented exemplars. However, the ability to
benefit from explanations only emerges with large-scale LMs, which are far
less accessible. In this work, we explore the less-studied setting of leveraging
explanations for small LMs to improve few-shot self-rationalization. We first
revisit the relationship between rationales and answers. Inspired by the
implicit mental process of how human beings assess explanations, we present a
novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to
automatically construct pseudo-parallel data for self-training by reducing the
problem of plausibility judgement to natural language inference. Experimental
results show ZARA achieves SOTA performance on the FEB benchmark, for both the
task accuracy and the explanation metric. In addition, we conduct human
evaluation and quantitative analysis to validate ZARA's ability to
automatically identify plausible and accurate rationale-answer pairs.
Comment: Accepted as a long paper at EMNLP Findings 2023
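The reduction of plausibility judgement to natural language inference can be
sketched as a filter over candidate rationale-answer pairs: keep a pair for
self-training only if an NLI model judges that the rationale entails the
answer. The premise/hypothesis construction and threshold are illustrative
assumptions, and `nli` stands in for any entailment scorer:

```python
from typing import Callable, List, Tuple

def zara_filter(nli: Callable[[str, str], float],
                pairs: List[Tuple[str, str, str]],
                threshold: float = 0.9) -> List[Tuple[str, str, str]]:
    """Sketch of ZARA-style filtering: a (question, rationale, answer)
    triple is kept as pseudo-parallel training data only when the NLI
    scorer assigns high entailment probability to the answer given the
    question plus rationale."""
    kept = []
    for question, rationale, answer in pairs:
        premise = f"{question} {rationale}"
        hypothesis = f"The answer is {answer}."
        if nli(premise, hypothesis) >= threshold:
            kept.append((question, rationale, answer))
    return kept
```

This turns the subjective question "is this explanation plausible?" into a
standard entailment check that small models can apply automatically.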